#mongodb logs for Thursday the 28th of August, 2014

[00:10:13] <pm2> Hi, I'm playing around with using the mongoc library to access a database from C code. I'm having trouble storing a date object in the db. The docs say dates are stored as signed 64-bit ints as the # of milliseconds since the epoch. So in C I'm basically doing "$date": time(NULL)*1000 - but that seems to end up storing "\u001" in the database. Any idea on what I'm doing wrong here?
[00:11:35] <pm2> The code I'm working with is here: UTufUZbK
[00:11:47] <pm2> errmmmm.... http://pastebin.com/UTufUZbK
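(The question went unanswered in the channel. For reference, a BSON date really is a signed 64-bit count of milliseconds since the epoch; the shell illustration below shows what a correctly stored one looks like. In C, libbson's bson_append_date_time() builds the field directly, which avoids hand-formatting "$date" JSON.)

```
// what a correctly stored BSON date looks like from the mongo shell
db.test.insert({ when: new Date() })
db.test.findOne().when            // ISODate("2014-08-28T00:10:13Z")
db.test.findOne().when.getTime()  // int64 milliseconds, e.g. 1409184613000
```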
[00:29:13] <ramfjord> Hey guys
[00:29:18] <ramfjord> I'm stuck on mongo 2.4.6
[00:29:31] <ramfjord> is there a way to get an aggregate call to return a collection of documents instead of one giant one?
[00:30:08] <joannac> ramfjord: Upgrade to 2.6
[00:30:09] <joannac> http://docs.mongodb.org/master/release-notes/2.6/#aggregation-enhancements
[00:30:15] <ramfjord> yeah...
[00:30:18] <ramfjord> that's not going to be easy
[00:30:35] <ramfjord> but I saw it
[00:31:00] <joannac> Not in 2.4 though; there you get back a single result document containing an array of documents
[00:41:22] <ramfjord> joannac: thanks
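(For reference, a sketch of the difference joannac points to, with a placeholder collection and pipeline: in 2.4 the entire result must fit in one 16MB document, while the 2.6 shell helper returns a cursor.)

```
// 2.4: one result document wrapping an array (16MB cap on the whole thing)
var r = db.events.aggregate([{ $group: { _id: "$type", n: { $sum: 1 } } }])
printjson(r.result)

// 2.6: the same shell call returns a cursor instead
db.events.aggregate([{ $group: { _id: "$type", n: { $sum: 1 } } }]).forEach(printjson)
```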
[01:26:57] <ramfjord> In the projection arguments for find, is there any way to rename a field to something else?
[01:27:17] <ramfjord> like there is with the $group pipeline operator in aggregation
[01:29:07] <Boomtime> http://docs.mongodb.org/manual/reference/operator/aggregation/project/#pipe._S_project
[01:29:52] <Boomtime> find just emits the documents it finds, you can use $project as the next stage to reform the document stream if you need to before feeding to the next stage
[01:30:04] <Boomtime> find = $match
[01:30:17] <ramfjord> you mean if I use aggregation instead of a traditional find
[01:30:38] <Boomtime> ah, sorry, you were asking about find, not aggregation
[01:30:39] <ramfjord> I may need to use a traditional one, at least until we upgrade to 2.6 to get a cursor back from aggregate
[01:31:18] <Boomtime> i do not believe there is a $project equivalent for find, only redaction and filter
[01:31:50] <ramfjord> mmmk, thanks
[01:34:15] <joannac> ramfjord: sounds like you want the equivalent of an SQL view...
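(A sketch of the rename ramfjord asked about, done with $project in the aggregation pipeline; the field names here are hypothetical.)

```
// find() projections can only include/exclude fields; renaming needs $project
db.users.aggregate([
  { $match: { active: true } },                 // the find() criteria go here
  { $project: { _id: 0, userName: "$name" } }   // emits "name" as "userName"
])
```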
[02:28:07] <tpayne> hello. I need to create a user on a new db called dev_myDb, but it says I need to be auth'd to create one. I have an admin user on the admin db which I can auth as, but how do i make a user on dev_c0smo
[02:28:23] <tpayne> i mean dev_myDb
[02:29:53] <joannac> use admin; db.auth(...); use dev_myDb; db.createUser(...)
[02:30:26] <tpayne> tried that but i'll try it again
[02:31:12] <tpayne> getting this: Error: couldn't add user: User and role management commands require auth data to have schema version 3 but found 1 at src/mongo/shell/db.js:1004
[02:31:52] <tpayne> joannac: ^ forgot to mention you
[02:32:13] <joannac> db.version()
[02:32:24] <joannac> looks like you have 2.4 or maybe even 2.2 style users
[02:33:48] <tpayne> 2.6.0
[02:33:59] <joannac> upgrade from 2.4 ?
[02:34:26] <tpayne> i'm not sure
[02:34:28] <joannac> as in, did you upgrade from 2.4?
[02:34:43] <joannac> http://docs.mongodb.org/master/release-notes/2.6-upgrade-authorization/
[02:34:44] <tpayne> role for my admin user is userAdminAnyDatabase
[02:35:04] <joannac> auth and run db.getSiblingDB("admin").runCommand({authSchemaUpgrade: 1 });
[02:35:49] <tpayne> { "done" : true, "ok" : 1 }
[02:35:56] <joannac> now try and add your user
[02:36:15] <tpayne> wow that worked
[02:36:45] <tpayne> thanks!
[02:36:56] <joannac> np
[02:37:00] <tpayne> i would have never figured that out, goodnight
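(joannac's fix, collected into one shell session for reference; user names and passwords are placeholders.)

```
use admin
db.auth("admin", "secret")
// upgrade 2.4-style user documents to the 2.6 schema
db.getSiblingDB("admin").runCommand({ authSchemaUpgrade: 1 })
// now per-database user creation works
use dev_myDb
db.createUser({ user: "dev", pwd: "secret",
                roles: [ { role: "readWrite", db: "dev_myDb" } ] })
```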
[04:27:07] <keeger> hello.
[04:28:08] <joannac> hi
[04:28:11] <keeger> i am considering using mongo for an application, but i come from a rdbms background and am wondering about whether mongo would fit my use case well
[04:28:53] <keeger> i have a lot of users who can schedule certain types of jobs to run at a future point in time, so i need to build some sort of queue system
[04:29:20] <keeger> in mongo, i think users would contain a collection of jobs.
[04:30:07] <keeger> seems simple enough. but how would my system be able to gather jobs at the right time? i.e., i poll every 5 seconds and go "give me jobs < now()"
[04:30:18] <keeger> but for all users
[04:30:33] <Boomtime> how many jobs might a user have? is there an upper limit?
[04:30:51] <keeger> no upper limit
[04:31:18] <Boomtime> you will allow a user to schedule, say, 1000 jobs?
[04:31:48] <keeger> sure
[04:31:54] <keeger> i dont think that would be likely
[04:32:30] <Boomtime> it affects your data model.. if you allow users to just keep scheduling jobs with no end in sight, then your system needs to tolerate that
[04:32:34] <keeger> probably 10-12 jobs a day from each user. so 120k jobs a day
[04:32:45] <Boomtime> total jobs won't matter
[04:33:16] <Boomtime> per user however, you could put jobs for a single user in an array of a single document representing that user
[04:33:28] <Boomtime> but a single document must not exceed 16mb in bson
[04:33:40] <Boomtime> and arrays get clunky beyond about 1000 entries
[04:33:45] <keeger> hmm
[04:34:00] <keeger> so having 10k users is going to be clunky?
[04:34:00] <Boomtime> if you need to go beyond that, then you will need a document per job
[04:34:05] <Boomtime> no
[04:34:11] <Boomtime> user per document
[04:34:18] <Boomtime> that will take you to billions with no drama
[04:34:39] <Boomtime> how big would a job be, when defined as a single document?
[04:35:04] <keeger> job is probably 500 bytes in ram
[04:35:24] <Boomtime> the question is whether you have a separate users collection and a jobs collection, or if you can store these together in a combined collection
[04:35:25] <keeger> date time and a string payload that's used in a callback
[04:35:43] <Boomtime> so a "job document" would be tiny
[04:36:08] <keeger> if it's standalone job doc, it would need a pointer back to the owner of the job
[04:36:19] <keeger> user _id i guess? so an int
[04:36:41] <Boomtime> you need to forget everything you know about sql, mongodb is not sql
[04:37:24] <Boomtime> there are no joins here, so every "reference to another collection" you make is just maintenance on you
[04:37:27] <keeger> i know. i actually think the user doc containing their jobs is a nice layout, which is far easier for mongo than sql
[04:37:45] <Boomtime> and it is what you should use if it is even remotely possible
[04:38:04] <keeger> but how do i have a system that checks when the job datetime is reached?
[04:38:32] <keeger> i can't spin a thread per user, that'd be pretty crazy
[04:40:10] <keeger> can i write a query that runs across the users->jobs collection and returns jobs based on datetime? that sounds expensive in mongodb but i could be wrong
[04:43:21] <keeger> for my app, 90% of the data is contained within the user, which is why i thought mongo fit very nicely. just a matter of figuring out the queue part :)
[04:55:48] <keeger> maybe i can utilize the Chapman project heh
[04:58:51] <Boomtime> why not just run a query every 5 minutes on the collection?
[04:59:20] <keeger> the users collection?
[04:59:33] <keeger> i mean, it'd be users->jobs
[05:01:22] <Boomtime> if you use a single collection, with a document structure like: { _id: <id>, username: "Some guy", jobs: [ { jid: <x>, cmd: "", at: <datetime> }, { jid: <y>, cmd: "", at: <datetime> } ] }
[05:02:02] <Boomtime> then you can index "jobs.at", and run a find on "jobs.at": { $lte: now } every however many seconds
[05:02:15] <Boomtime> you mention 5 minutes
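(A minimal sketch of that polling approach, assuming the single-collection document structure above.)

```
// index the embedded due-times once
db.users.ensureIndex({ "jobs.at": 1 })

// poll: which users have at least one job that is now due?
var now = new Date()
db.users.find({ "jobs.at": { $lte: now } })
```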
[05:03:14] <Boomtime> you could run an aggregation even to spin out each initial match into individual job documents for processing
[05:03:40] <Boomtime> you need to pop those job documents from the user document jobs array
[05:03:41] <keeger> group by jobs.at?
[05:03:53] <Boomtime> no need to group
[05:04:03] <keeger> sorry thought that's what aggregation was lol
[05:05:01] <Boomtime> so the trouble with a basic find() is that it finds each document only once, no matter how many jobs that document might contain which need to be started
[05:05:52] <Boomtime> you can either walk the jobs array on the client, looking for which jobs caused the match to occur, or use aggregation to find them and separate them into their own documents
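(A sketch of that aggregation, assuming the { jobs: [ { jid, cmd, at } ] } structure from earlier; the second $match drops a matched user's not-yet-due jobs after $unwind.)

```
var now = new Date()
db.users.aggregate([
  { $match: { "jobs.at": { $lte: now } } },   // users with at least one due job
  { $unwind: "$jobs" },                       // one document per job
  { $match: { "jobs.at": { $lte: now } } }    // keep only the due ones
])
```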
[05:06:15] <Boomtime> alternatively, you can just create a collection with 1 job per document
[05:07:05] <keeger> that would get large though? 10k documents at a min
[05:07:13] <Boomtime> same result, it'll just be less space efficient on disk
[05:07:23] <Boomtime> 10k docs is nothing
[05:07:40] <keeger> so when you said the 1k array thing, you meant an array in a doc
[05:07:45] <keeger> not an actual collection itself
[05:07:49] <Boomtime> yes
[05:08:01] <Boomtime> jobs: [ 0, 1, 2, 3... ]
[05:08:14] <Boomtime> mongodb places no limit on the length of array per se
[05:08:33] <Boomtime> but handling it becomes very messy once it gets too big
[05:08:48] <keeger> gotcha
[05:09:21] <keeger> the aggregation would actually remove the job from the users document?
[05:09:35] <Boomtime> on the other hand, having a job collection means very tiny documents and requires you to maintain a "reference" to a users collection, a very sql like design
[05:09:55] <Boomtime> no, aggregation cannot make changes to documents
[05:10:05] <keeger> so it'd make a copy into the new document then
[05:10:11] <Boomtime> aggregation is a sophisticated form of find
[05:11:06] <Boomtime> for each job that you found you would run an update to remove that specific job entry, using the job id as part of the find
[05:11:58] <Boomtime> that way you can even multi-thread the retrieval of jobs and know that they can detect each other removing pending jobs, and each job would only be run once
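(A sketch of that removal, assuming a 2.6 shell where update() returns a WriteResult; userId and jobId are placeholders. $pull is atomic per document, so if two workers race on the same job, only one sees nModified = 1.)

```
var res = db.users.update(
  { _id: userId, "jobs.jid": jobId },      // match the specific pending job
  { $pull: { jobs: { jid: jobId } } }      // remove it from the array
)
if (res.nModified === 1) {
  // this worker claimed the job; safe to run it exactly once
}
```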
[05:13:26] <keeger> that would be great. so i can aggregate to find just the jobs, and then mark the found jobs as removed? would the jobs have a pointer back to the user in their document?
[05:13:58] <keeger> i mean, if i do a callback, when it is done, i'd like to mark the job complete in the user's document somehow
[05:14:24] <keeger> i can either update the job (status flag) or make a completedjobs collection inside it
[05:24:31] <keeger> i think i might do the jobs collection, with some references ala sql. seems like a simple setup to me
[05:25:44] <keeger> i'm excited to learn mongo. :) thx for the help Boomtime
[07:14:09] <anubhav_> I'm having an issue connecting with mongodb
[07:14:12] <anubhav_> can someone help me
[07:14:20] <anubhav_> ?
[07:15:20] <Boomtime> what is the problem?
[07:15:48] <anubhav_> I am using pymongo
[07:16:03] <anubhav_> from my python console
[07:16:12] <anubhav_> if I try to run any query
[07:16:13] <anubhav_> like
[07:16:19] <anubhav_> db.event.find().count()
[07:16:29] <anubhav_> it takes a hell of a lot of time
[07:16:38] <anubhav_> then I enter ctrl c
[07:16:43] <anubhav_> and then run the command again
[07:16:47] <anubhav_> it gives the result
[07:17:00] <anubhav_> somehow on the first command it is not connecting to the server
[07:17:43] <Boomtime> have you tried the same thing using the mongo shell?
[07:19:07] <anubhav_> my mongo server
[07:19:13] <anubhav_> runs on a remote machine
[07:19:22] <anubhav_> to connect to it I use pymongo
[07:21:57] <Boomtime> so? test what you are trying to do using the mongo shell, it runs from anywhere
[07:22:24] <anubhav_> okay
[07:22:55] <Boomtime> it's just a test, i'm not asking you to switch to mongo shell, i'm trying to determine if the problem is pymongo or server-side
[07:23:21] <Boomtime> if you see the same problem in mongo shell, then it is at least not pymongo, maybe it is network etc
[07:24:10] <Boomtime> if the mongo shell does not have this problem then maybe the connection-string is a bit wonky
[08:37:11] <SubCreative> I just want to come in here and give my recommendation of mubsub for anyone needing to listen for changes on their collections.
[08:37:21] <SubCreative> I just started using it, and it's amazing
[08:53:32] <jordana> Looks really cool actually.
[08:58:16] <SubCreative> The only thing I can't seem to figure out is how to listen to changes to documents already inserted into the collection
[08:58:35] <SubCreative> mubsub will subscribe to a collection and listen for NEW documents
[08:59:15] <SubCreative> If anyone knew a method of doing that I would <3 you
[09:02:26] <jordana> I don't think you can
[09:02:28] <jordana> looking at the code
[09:02:32] <SubCreative> Actually I think you can..
[09:02:37] <SubCreative> https://www.npmjs.org/package/mongo-oplog
[09:02:45] <SubCreative> you can listen to oplog for event changes
[09:02:51] <jordana> Oh yeah
[09:02:53] <jordana> I thought you meant
[09:02:55] <SubCreative> update, delete, insert
[09:02:57] <jordana> in mubsub
[09:03:03] <SubCreative> Ahh yea, not in mubsub
[09:03:10] <jordana> Yeah you can listen to the oblog
[09:03:12] <jordana> oplog*
[09:03:14] <jordana> check this out..
[09:03:31] <jordana> https://github.com/EqualExperts/Tayra
[09:03:36] <jordana> It's a backup tool that listens to the oplog
[09:03:39] <jordana> it might be useful
[09:03:42] <SubCreative> Oo
[09:03:44] <SubCreative> Thanks
[09:04:11] <SubCreative> Appreciate this.
[09:04:41] <jordana> You probably only need to inspect how it listens to the oplog and take it from there but I'm happy to help you out if you need it
[09:05:52] <SubCreative> Thanks I think I should be ok with either mongo-live
[09:05:55] <SubCreative> or mongo-oplog
[09:06:19] <SubCreative> I'll let ya know If I run into any troubles.
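(For reference, the same idea straight from the mongo shell: the oplog is an ordinary capped collection in the local database, readable with a tailable cursor. op is "i"/"u"/"d" for insert/update/delete; on a replica set the collection is oplog.rs.)

```
use local
db.oplog.rs.find({ ns: "test.posts" })
  .addOption(DBQuery.Option.tailable)
  .addOption(DBQuery.Option.awaitData)
```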
[09:30:02] <SubCreative> jordana: You still there?
[09:30:35] <jordana> Always!
[09:30:44] <jordana> Except when I'm asleep
[09:30:52] <jordana> which is happening less and less....
[09:31:00] <SubCreative> Does mongo create an oplog for a non-replica-set instance?
[09:31:15] <SubCreative> I'm using this mongo-oplog package and not getting any results.
[09:31:27] <jordana> Is it a replicaset?
[09:31:32] <SubCreative> looking into the mongo documentation it seems a replica set is for multi-instance deployments
[09:31:36] <jordana> I don't think Mongo does
[09:31:42] <jordana> if you've not got a replica set
[09:31:43] <SubCreative> I am running single instance
[09:31:43] <jordana> but
[09:32:09] <jordana> You could setup another instance that doesn't house any data and see if it does create an oplog
[09:32:52] <SubCreative> http://loosexaml.wordpress.com/2012/09/03/how-to-get-a-mongodb-oplog-without-a-full-replica-set/
[09:33:15] <jordana> Ahh
[09:33:17] <jordana> There you go
[09:33:22] <jordana> does that work?
[09:33:24] <SubCreative> Answering all my own questions as usual
[09:33:26] <SubCreative> Ill look
[09:33:28] <jordana> :)
[09:33:29] <SubCreative> it seems it would
[09:33:43] <jordana> I see no reason why not
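(The usual shortcut, assuming mongod can be restarted: run it as a single-member replica set, which creates an oplog without any extra machines.)

```
// restart mongod with e.g.:  mongod --replSet rs0 --oplogSize 100
// then, from the shell:
rs.initiate()                                  // single-member set
db.getSiblingDB("local").oplog.rs.findOne()    // the oplog now exists
```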
[10:02:00] <SubCreative> jordana: got the oplog working
[10:02:06] <SubCreative> but still not working.
[10:02:16] <SubCreative> I see changes registered in the oplog collection
[10:02:27] <jordana> but?
[10:02:49] <SubCreative> no response from mongo-oplog
[10:02:56] <SubCreative> https://www.npmjs.org/package/mongo-oplog
[10:03:10] <SubCreative> im using the example essentially but with my own collection
[10:03:18] <SubCreative> nada
[10:05:22] <jordana> what's your replica set name?
[10:06:00] <SubCreative> rs0
[10:06:47] <jordana> can you try that example but name the replica set test?
[10:07:07] <SubCreative> you're referring to the post.test?
[10:07:08] <SubCreative> Namespace for emitting, namespace format is database + . + collection eg.(test.posts).
[10:07:28] <SubCreative> sorry test.post... that's referring to db+collection
[10:07:47] <jordana> Yeah I saw that but I'm not convinced that the replica set name has nothing to do with it
[10:07:59] <SubCreative> 1 second
[10:09:22] <SubCreative> no go
[10:09:47] <jordana> no exceptions or anything like that showing?
[10:09:49] <jordana> in node?
[10:10:50] <SubCreative> nope
[10:11:07] <SubCreative> I'm using express and my submission to the collection is displayed
[10:11:15] <SubCreative> and a new document is inserted into the collection
[10:11:18] <SubCreative> oplog updated
[10:11:39] <SubCreative> just not getting the node-oplog to work
[10:11:46] <SubCreative> sorry mongo-oplog
[10:13:20] <SubCreative> so strange
[10:13:24] <jordana> Does it have any debug out? Sorry I don't have time to try this on my machine right now :(
[10:13:30] <SubCreative> Its ok
[10:13:38] <SubCreative> No debug that I can see
[10:13:44] <SubCreative> well actually
[10:13:47] <jordana> Does it not say "starting to tail oplog"
[10:13:52] <jordana> when you run it?
[10:13:58] <SubCreative> no
[10:15:23] <jordana> I think it should
[10:15:25] <jordana> I can't see a debug mode
[10:15:31] <jordana> but in the source there should be output
[10:15:42] <SubCreative> I thought so too, I even added the oplog.tail() myself
[10:18:06] <jordana> It's using the debug package, and seeing as I can't see a wrapper around it inside the source I think it should definitely be outputting debug messages.
[10:18:52] <jordana> You said you're using express
[10:18:56] <jordana> have you got something like this
[10:18:56] <SubCreative> yup
[10:18:58] <jordana> process.on('uncaughtException', function (exception) {
[10:18:58] <jordana> console.log(exception); // to see your exception details in the console
[10:18:58] <jordana> // if you are on production, maybe you can send the exception details to your
[10:18:58] <jordana> // email as well ?
[10:18:58] <jordana> });
[10:19:03] <jordana> woospie*
[10:19:06] <jordana> shouldn't have done that
[10:21:08] <SubCreative> I have that, nothing.
[10:22:45] <SubCreative> stupid thing hates me
[10:23:00] <SubCreative> i literally followed it word for word
[10:23:49] <jordana> What are you running it on?
[10:24:02] <SubCreative> mac
[10:24:33] <jordana> I'll try it in 30 minutes and let you know, you haven't solved it by then!
[10:24:54] <jordana> if*
[10:32:07] <SubCreative> I got it
[10:32:25] <SubCreative> So, the oplog is in the local database
[10:32:47] <SubCreative> i had my mongodb connection pointed at my actual application database
[10:32:54] <SubCreative> derp
[10:34:29] <SubCreative> Thanks for your efforts to help, I hope I can extend the favor one day :-)
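(The fix, sketched against the mongo-oplog README of the time; exact API details may vary by version. The key point is connecting to the local database, where the oplog lives, rather than to the application database.)

```
var MongoOplog = require('mongo-oplog');

// connect to "local", not to the application database;
// ns filters events to one database.collection namespace
var oplog = MongoOplog('mongodb://127.0.0.1:27017/local', { ns: 'test.posts' });

oplog.on('insert', function (doc) { console.log('inserted:', doc); });
oplog.on('update', function (doc) { console.log('updated:', doc); });
oplog.tail();
```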
[10:43:30] <Zelest> The "syncingTo" field in rs.status() is seriously a poor choice of words..
[10:50:40] <jordana> SubCreative: Anytime! :)
[11:59:12] <nevan> hi, anybody working with the ruby driver... i'm facing an issue with some mongodb commands being run from ruby
[11:59:38] <nevan> i want to get a list of all running queries via ruby
[13:12:30] <nickpresta> When would I want to use an Arbiter in my even-membered replica set? Why not just use an additional mongod that holds data? Is it strictly an "arbiters don't need dedicated hardware" thing?
[13:13:17] <kali> nickpresta: yes, just that
[13:14:14] <nickpresta> kali: thanks. How terrible would it be to use an additional data-storing mongod (on crappier hardware) with a lower priority (so it can't be elected primary) instead of an arbiter?
[13:15:23] <cheeser> you should read the documentation on setting those priorities. there are subtle gotchas involved.
[13:16:10] <kali> nickpresta: think about the polar bears
[13:16:58] <nickpresta> kali: agreed, but due to requirements I cannot run an arbiter on another machine I have in my cluster -- it has to be "dedicated" anyways... :-(
[13:17:01] <nickpresta> cheeser: thanks. will do
[13:17:58] <kali> nickpresta: well, just add a tiny server and run an arbiter on it
[13:18:20] <kali> nickpresta: what's the point of making it a full data node ?
[13:19:50] <nickpresta> kali: I was thinking I may be able to use it as a read slave for analytics or something similar
[13:20:54] <kali> well, that can work
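(For reference, sketches of both options; hostnames are placeholders.)

```
// a dedicated arbiter: votes in elections, stores no data
rs.addArb("tinybox:27017")

// or a data-bearing member that can never become primary,
// usable as an analytics read target
rs.add({ _id: 3, host: "slowbox:27017", priority: 0 })
```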
[13:32:32] <Const> Hello, do you know how to use something like $unwind on sub-documents?
[13:33:35] <kali> Const: mmm yeah... it's not rocket science, so i assume you have something that does not work to show us :)
[13:34:18] <Const> Yes Kali, of course...
[13:36:21] <Const> It's linked to the PHP connector. As I use $user['$inc']["history.${index}.${value}.all"] = 1 to increment a value, when the value does not exist, it creates it, but not as an array: http://pastebin.com/3qvfAqji
[13:36:51] <Const> Then, I'd need to perform some aggregates on it, to return the most-seen author for example
[13:37:22] <Const> But I can't, as it's not considered an array
[13:39:18] <kali> Const: i'm afraid you'll have to redesign
[13:39:28] <kali> Const: it's a bad idea to use "variable" keys
[13:39:51] <Const> why, kali?
[13:39:53] <kali> Const: it makes dozens of things awkward all over the place
[13:40:28] <Const> ah but it's not variables like, user's input. it's just because I use it into a loop
[13:40:41] <Const> I define the indexes
[13:41:07] <kali> Const: everything in the aggregation framework, the query language and the optimiser is designed for documents where the key names are keywords
[13:42:08] <kali> authors: [ { author: "lordVador", counter:{} }, ... ] should work much better
[13:42:54] <Const> ahh
[13:42:54] <kali> with counter (or "counters" containing the four counters you currently have)
[13:43:17] <Const> ok I think I understand.
[13:44:30] <kali> the write ops will be a tad more complex: you'll have to use the positional operator ($) to make the update
[13:44:47] <kali> but in the end, it should simplify many things
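(A sketch of the redesigned update kali describes, assuming the authors array structure above; userId is a placeholder. Matching the author in the criteria lets the positional $ point at the right array element in the $inc.)

```
db.users.update(
  { _id: userId, "history.authors.author": "lordVador" },
  { $inc: { "history.authors.$.counters.all": 1 } }
)
```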
[13:53:08] <edrocks> if i have a collection named teams can i make one named "teams.members"?
[13:53:57] <jordana> ye
[13:53:59] <jordana> yes*
[13:54:03] <kali> edrocks: it's legal, but i would not recommend it
[13:54:22] <kali> edrocks: use "team_members"
[13:54:25] <edrocks> kali: any reason why?
[13:55:04] <kali> edrocks: because the shell will get confused... you'll have to use db["team.members"].find() instead of db.team_members.find()
[13:55:26] <edrocks> o ok thanks ill use that then
[13:55:30] <kali> edrocks: not to mention your fellow developers
[13:55:47] <edrocks> well it's just me but that does sound like a pain
[13:55:49] <jordana> kali: In what sense? You can do db.teams.members.find()
[13:56:24] <kali> jordana: really ? this one works ?
[13:56:36] <jordana> kali, yes I have many namespaces like that
[13:56:48] <jordana> From my point of view it's actually better
[13:57:39] <kali> indeed, it does work
[13:57:43] <kali> sorry
[13:57:45] <edrocks> db.system.indexes.find() works but it might be special case for system collections
[13:57:59] <jordana> edrocks: No, it works for anything
[13:58:00] <edrocks> ok that's good, i like the . better than _
[13:58:23] <kali> I actually can never remember which one of database or collection can have a dot in its name, so i find dot in collection names confusing
[13:59:03] <cheeser> agreed
[13:59:09] <edrocks> do you make multiple dbs for one project or just different collections? I've been using multiple collections in one db
[13:59:24] <Serge> Hey
[13:59:26] <jordana> I guess it's preference
[13:59:46] <Serge> Is there any way to access original document data from a double $group aggregation?
[14:00:16] <Serge> pretty much the second $group loses all data that it does not create, and $project won't let me access it
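(This one wasn't answered in the channel; the usual workaround is to carry the original fields through each $group with accumulators such as $first or $push, so later stages can still see them. Field names below are hypothetical.)

```
db.orders.aggregate([
  { $group: { _id: "$custId",
              total: { $sum: "$amount" },
              custName: { $first: "$custName" } } },  // carry the field along
  { $group: { _id: "$custName",
              customers: { $sum: 1 },
              grand: { $sum: "$total" } } }           // still available here
])
```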
[14:01:10] <kali> edrocks: it can make sense to group collections in more than one database when the project becomes bigger
[14:01:59] <edrocks> kali: ok i was just wondering because i couldn't think of any real reason besides the namespace limit, but even that you can make bigger. It would make sense though if you do have a ton of collections
[14:02:50] <kali> edrocks: yeah, just keeping it clean
[14:03:30] <kali> edrocks: it may also be argued that it is a workaround for the lock-per-database issue
[14:04:11] <edrocks> is there any reason why they didn't make it a lock per collection?
[14:04:19] <edrocks> or are they working on that?
[14:04:34] <kali> edrocks: they're working on getting finer-grained locks
[14:04:46] <kali> edrocks: but you need a huge load to make it an actual issue
[14:04:56] <edrocks> ok
[14:07:24] <Const> Kali, I managed to get this into my document: http://pastebin.com/VWnUuBfP but now, I don't know how to target it for the $inc. You know, the dotted path, like history.authors.all, but targeting where "authors" : "myvalue"
[14:07:50] <Const> (maybe I need to add that my $inc is added to a bulk update)
[14:08:35] <ssarah> what's up with this error guys? http://pastebin.com/HWA0MfiX it's probably something obvious i'm missing
[14:09:12] <edrocks> Const: I think you would use {"$inc":{"counter.all":"1"}} (format may be wrong, but you're probably not using counter.all)
[14:10:04] <kali> Const: mmmm... you may need two queries actually... one to add the author if it's not there, and then one to $inc
[14:10:07] <Const> edrocks, ok, but I want to increase counter.all only where "authors" = myvalue
[14:10:35] <Const> kali, that's what I thought, I already add a $addToSet to create it
[14:10:49] <edrocks> {"authors":"someval",{your inc document here}}
[14:10:49] <Const> but then, in my $inc, how to find the good one?
[14:11:01] <kali> Const: the second one you can write by matching the author in the first find() argument, and using the positional operator (.$.) to refer to it in the $inc
[14:11:15] <kali> Const: search the doc for the positional operator
[14:13:54] <Const> right kali, but I think I can't. Here's more information: it's a script that adds multiple authors to the document, updating thousands of documents in the collection. So I'm using a bulk (in PHP, MongoUpdateBatch()). The $inc is added to the batch, in a loop where other $incs are added too. How could I add a query to the $inc, then?
[14:15:41] <kali> Const: have you checked out what the positional operator does yet ?
[14:16:35] <Serge> Is it possible to use double $project in aggregation framework?
[14:17:17] <Const> yes kali, I found an example like db.collection.update({"versions.v": "some_version"}, {"$inc": {"versions.$.count": 1}}). But within a batch update, where should I put the query to match the index?
[14:19:42] <kali> Const: well, each update in the bulk comes with a criteria and a modifier, right ?
[14:20:06] <Const> yes of course, but into each update, I have multiple $inc
[14:20:08] <dberry> If I migrate a replica set(move config servers, promote secondaries) will the shard info in sh.status() reflect the changes or do I need to add new shards too?
[14:20:28] <kali> Const: you mean multiple $inc for multiple authors ?
[14:22:43] <Const> well, exactly: for each update, there is more than one author to increase. For example, I read a list of posts favorited by users. So yes, for each user there is more than one post, so I must add the author of each post.
[14:22:50] <Const> Hmm I hope I'm clear :-/
[14:23:13] <kali> afraid you'll have to flatten these updates in the bulk :/
[14:23:32] <kali> Const: i thought you were just updating the 4 or 5 counters for one author in each of them
[14:23:51] <Const> no, then it would have been too easy :p
[14:24:38] <Const> ah I have an idea...let's try... :)
[15:47:41] <Const> kali, what I tried did not work... but I may find something. For now, if I add my $inc using something like history.authors.me.all: 1, history.authors.you.all: 1
[15:47:59] <Const> history won't be an array, but a sub-doc
[15:48:09] <Const> Do you know how to force it as an array?
[15:49:17] <Const> I'd like "authors" to be an index of the array "history"
[17:22:53] <felixjet__> how can i import a mongodump backup to a different database?
[17:23:33] <felixjet__> --dbpath?
[17:25:42] <felixjet__> wait no, that's to specify a path
[17:25:45] <felixjet__> i want to specify the database
[20:03:26] <ssarah> hey guys, i started my replica set and i'm getting this: [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
[20:03:30] <ssarah> over and over
[20:03:32] <ssarah> is it normal?
[20:08:40] <digicyc> ssarah: Taking a guess at this, but can it hit your conf server(s)?
[20:08:46] <digicyc> Or do you have conf servers setup?
[20:12:16] <ssarah> digicyc: no wait, i thought it was weird because i had to do "export LC_ALL=C" for it to start. but now i just ssh'd into another shell and ran mongo -> rs.initiate() and it seems happy.
[20:12:48] <digicyc> ahh I was off anyways now that I looked a bit further. ;)
[20:48:51] <Chaos_Zero> is it possible to limit a user so that they can only insert but not delete or modify documents?
[21:15:58] <Chaos_Zero> how can I make a service like auditd available to start but not run on startup?
[21:56:08] <SpNg> I’m trying to use the aggregation framework to see how many records are created on a date. Currently the “created” field is using Unix time stamps. Is there a way to convert these to mongo dates?
[21:56:44] <cheeser> why would you need to? just query directly against that value.
[21:58:51] <SpNg> cheeser: what’s the best way to do aggregation against that value? can I write a function for the grouping?
[22:00:04] <cheeser> convert the date you're looking for to a UNIX timestamp and query with that.
[22:01:00] <SpNg> cheeser: I’m looking to take the last week of records and calculate the number of records created on each day.
[22:01:20] <cheeser> so really you need to group by day, then, yeah?
[22:01:25] <SpNg> I can do it in code, but I was hoping to use mongo’s aggregation framework. maybe it’s not the right tool for this though
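(One way to do this entirely in the aggregation framework, without converting anything to BSON dates: bucket the timestamps by day with integer arithmetic. This assumes "created" holds seconds; use 86400000 for milliseconds. The collection name is a placeholder.)

```
var weekAgo = Math.floor(Date.now() / 1000) - 7 * 86400
db.records.aggregate([
  { $match: { created: { $gte: weekAgo } } },
  { $group: {
      // round each timestamp down to its midnight: ts - (ts % 86400)
      _id: { $subtract: ["$created", { $mod: ["$created", 86400] }] },
      count: { $sum: 1 }
  } },
  { $sort: { _id: 1 } }
])
```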
[23:24:19] <ruby-lang654> hi, facing a problem with the ruby mongodb driver....
[23:27:01] <ruby-lang654> has anybody tried running db.currentOp from a ruby script ?
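(Not answered in the channel. db.currentOp() in the shell is just a findOne on a virtual collection, so any driver, including ruby's, can issue the equivalent query against the admin database.)

```
// what the shell helper does under the hood
db.getSiblingDB("admin").$cmd.sys.inprog.findOne()
```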