PMXBOT Log file Viewer


#mongodb logs for Thursday the 17th of April, 2014

[00:06:57] <trusktr> So that each user can have their own collections for their own use without conflicting with others?
[03:35:43] <ShortWave> hi all
[03:35:49] <ShortWave> Gotta question
[03:41:45] <joannac> ...are you going to tell us the question?
[03:44:53] <ShortWave> Yes
[03:45:22] <ShortWave> I'm using a projection to do a truncation with value - (mod value)
[03:45:37] <ShortWave> This is being done against a geospatial query
[03:45:42] <ShortWave> Sometimes it brings back weird results
[03:45:51] <ShortWave> like most of the time, I get 4.52 miles or something like that
[03:45:54] <ShortWave> occasionally, I get
[03:45:58] <ShortWave> 4.5200000001
[03:49:03] <joannac> http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems ?
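ShortWave's stray 4.5200000001 is the classic IEEE-754 artifact joannac's link describes. A minimal sketch (not from the log; the values are invented) of how a mod-based truncation can surface it, plus the usual round-after workaround:

```javascript
// IEEE-754 doubles cannot represent most decimal fractions exactly, so
// value - (value % step) occasionally lands a hair off the expected decimal.
const value = 4.5234;
const step = 0.01;
const truncated = value - (value % step);  // intended: 4.52, may carry float noise

// Common workaround: round to a fixed number of decimals after truncating.
const fixed = Math.round(truncated * 100) / 100;
console.log(fixed); // 4.52
```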
[04:38:09] <ioio> https://bugs.php.net/bug.php?id=67075&thanks=6
[04:38:31] <ioio> Download of "pecl/mongo" succeeded, but it is not a valid package archive
[04:39:12] <ioio> ubuntu14.04 PHP Version 5.5.9
[06:13:19] <odigem> hi
[06:13:35] <odigem> i can limit memory usage?
[06:22:29] <joannac> odigem: for what purpose?
[06:23:02] <odigem> joannac: i can't work on the computer if mongo eats all the memory :D
[06:23:02] <joannac> are you seeing memory pressure?
[06:24:09] <odigem> joannac: mongo use all memory
[06:24:17] <joannac> if other processes need memory, the OS will release it
[06:24:31] <odigem> joannac: no
[06:24:41] <odigem> its windows, with fucking page file
[06:25:01] <joannac> Windows Server?
[06:25:02] <odigem> if os need memory os store not needed memory to fucking page file
[06:25:15] <odigem> joannac: no, simple windows
[06:25:33] <odigem> and make hard lags
[06:27:08] <odigem> so it is possible to limit memory usage?
[06:29:20] <odigem> maybe i can set it to not store indexes in ram?
[06:29:29] <odigem> or something else
[06:31:41] <joannac> get rid of your page file?
[06:32:10] <joannac> it seems kind of strange though
[06:32:15] <joannac> how large is your working size?
[06:32:25] <joannac> working set size?
[06:36:01] <odigem> 8gb, as i remember
[06:36:18] <joannac> how much RAM do you have?
[06:36:53] <odigem> 8 (((
[06:37:12] <joannac> so you don't have enough RAM to support your working set
[06:37:57] <odigem> it's enough for work, but sometimes i need to reboot the mongo server
[06:39:59] <joannac> odigem: if you have 8gb of data you need to use for daily work (queries, inserts, whatever), and you only have 8gb of RAM (that you need to share with the OS and other processes), you don't have enough RAM to support mongodb, and you're going to be swapping all the time
[06:40:35] <odigem> I can't just leave it all unattended for a long time; I need to stop workflows and restart mongo
[06:40:38] <joannac> you can either reduce your working set (by eliminating unneeded indexes, etc), or you can periodically restart and start paging in again from scratch
[06:41:38] <odigem> "by eliminating unneeded indexes, etc" — can you give more detail?
[06:42:02] <odigem> i don't use selects, only inserts
[06:42:07] <odigem> i don't need indexes
[06:42:37] <odigem> almost
[06:44:36] <devkev> if you have indexes that are present, but not needed for your queries, you can drop them
[06:44:59] <odigem> how to?
[06:45:00] <devkev> but if you already have the minimum needed indexes, then you won't be able to do that
[06:48:46] <devkev> you can use these instructions to get a list of indexes: http://docs.mongodb.org/manual/tutorial/list-indexes/
[06:48:57] <devkev> and then these instructions to remove unneeded indexes: http://docs.mongodb.org/manual/tutorial/remove-indexes/
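Putting devkev's two links together, a minimal shell session might look like this (the collection name `events` is invented for illustration; syntax is for the 2.4/2.6-era shell discussed here):

```javascript
// List the indexes on a collection, then drop one your queries don't use.
db.events.getIndexes()
// e.g. drop a single-field index on "foo" (the _id index cannot be dropped):
db.events.dropIndex({ foo: 1 })
```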
[07:30:01] <kevc> does anyone know how to reset the warning "you need to create a unique index on _id"
[07:30:20] <kevc> I've dropped the affected databases, but the warning still persists
[07:35:28] <joannac> kevc: restarted the mongod process, and you still get it in the logs?
[07:43:28] <fatih_> How can I query a field that's existing but is not null?
[07:43:29] <fatih_> http://docs.mongodb.org/manual/reference/operator/query/exists/
[07:43:47] <fatih_> this matches fields whose value is null
[07:44:04] <fatih_> however I need one that matches only if it exists and is not null
[07:47:53] <jinmatt> fatih_: just do a normal query with where condition that field is not null
[07:48:14] <fatih_> if the field doesn't exist jinmatt ?
[07:48:22] <fatih_> then would it still return anything?
[07:48:51] <jinmatt> no…as you are specifying that field shouldn’t be null
[07:49:57] <fatih_> ok thanks
[07:49:59] <jinmatt> fatih_ try $exists: true, $nin: null
[07:51:38] <joannac> fatih_: http://pastebin.com/Y2Qkh7iA
[07:52:10] <fatih_> what if the $exists: true doesn't exists ?
[07:52:14] <fatih_> I've just tested with
[07:52:18] <fatih_> $ne: null
[07:52:25] <fatih_> and it was working
[07:52:26] <joannac> jinmatt: $nin expects an array, you would need to give it a 1-element array
[07:52:59] <jinmatt> guess [null] will work then
[07:53:03] <fatih_> like {foo :{$ne:null}
[07:53:15] <fatih_> If I have to add exists too, I will
[07:53:18] <fatih_> just curious if it's needed
[07:57:18] <joannac> probably not, querying on a field has an implicit $exists i think
[07:57:24] <joannac> test and see?
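For what it's worth, joannac's hunch is right: `{field: {$ne: null}}` excludes both explicit nulls and missing fields, because a missing field compares equal to null in MongoDB, so the extra `$exists: true` is redundant. A quick simulation of that matching rule in plain JavaScript (not the server's matcher, just an illustration):

```javascript
// Documents as MongoDB would see them: present, explicitly null, and missing.
const docs = [{ foo: 1 }, { foo: null }, {}];

// {foo: {$ne: null}} matches only documents where foo exists and is not null.
const matches = docs.filter(d => d.foo !== null && d.foo !== undefined);

console.log(matches); // [ { foo: 1 } ]
```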
[08:08:01] <ioio> Download of "pecl/mongo" succeeded, but it is not a valid package archive
[08:08:01] <ioio> ubuntu14.04 PHP Version 5.5.9
[08:09:12] <Nodex> eaasier to use git no?
[08:10:10] <ioio> what?
[08:10:47] <Nodex> easier to use git no?
[08:12:06] <kevc> joannac: not tried that yet, still waiting on this syncing up. will give it a go, thanks
[08:14:49] <ioio> thanks, i try use git
[08:37:43] <fdf> Hi, I have a sharded 3-node cluster with 2 replica sets. When I run sh.status() I can see partitioned: true, but under chunks I can see only one set. Is that ok?
[09:16:57] <k_sze[work]> I'm getting 'not master' error when I try to mapReduce
[09:54:16] <k_sze[work]> I'm trying to perform an inline mapReduce and mongo still gives me 'not master' error.
[09:54:21] <k_sze[work]> What may be wrong?
[09:55:43] <kees_> are you working on the master? (aka, what does rs.status() say?)
[09:57:06] <k_sze[work]> kees_: I'm on the secondary, but inline map reduce is supposed to work on secondaries, no?
[09:57:22] <kees_> i guess not because it gives you that error
[09:57:49] <kees_> or your read preferences might be off
[09:58:29] <k_sze[work]> http://docs.mongodb.org/manual/reference/command/mapReduce/#output-inline
[09:59:22] <k_sze[work]> And even more strange is that the map reduce returns no result at all on the master.
[09:59:32] <k_sze[work]> It was still working like an hour ago.
[10:13:21] <k_sze[work]> Passing the sort option to mapReduce is resulting in no input.
[10:29:10] <jinmatt> whenever I do rs.add on ec2 instances on aws I get the following:
[10:29:25] <jinmatt> {
[10:29:26] <jinmatt> "errmsg" : "exception: set name does not match the set name host ec2-xx-xx-xx-xx.compute-1.amazonaws.com:27017 expects",
[10:29:27] <jinmatt> "code" : 13145,
[10:29:28] <jinmatt> "ok" : 0
[10:29:29] <jinmatt> }
[10:30:28] <jinmatt> anyone here have setup replica sets on aws ec2?
[10:43:41] <jinmatt> anyone?
[11:07:23] <_bart> Hi, someone should update the docs for Rails 3 http://docs.mongodb.org/ecosystem/tutorial/getting-started-with-ruby-on-rails-3/
[11:07:28] <_bart> the bson_ext is no longer a problem
[12:02:47] <izolate> is mydb a special db or can I delete it?
[12:04:25] <Derick> it is not a special db
[12:04:34] <Derick> but I can't say whether you can delete it of course.
[12:04:56] <izolate> cool, thanks
[12:40:56] <Lujeni> Hello - $inc and $addToSet can't be performed in the same update query? Thx
[12:41:32] <MANCHUCK> Lujeni, I have done both in the same query
[12:42:11] <MANCHUCK> you might just have an issue with where the comma is in the object
[12:42:17] <MANCHUCK> i have issues like that all the time
[12:42:17] <Lujeni> MANCHUCK, ah u don't have any conflicting mode ?
[12:42:41] <MANCHUCK> if you paste the query i can give it a look over
[12:43:01] <Lujeni> MANCHUCK, ofc
[12:44:29] <joannac> jinmatt: yes
[12:44:48] <Nodex> normally you can't multiple things on the same key/value in a single update. However, some of this changed in 2.6
[12:45:22] <jinmatt> joannac: which distro did you use? Ubuntu?
[12:45:57] <jinmatt> joannac: did is set the hostname to ec2 public DNS or something like that?
[12:46:10] <jinmatt> joannac: *did you set
[12:46:32] <Lujeni> MANCHUCK, http://0bin.net/paste/pCWgynmb2cVXY5x8#FVQep8DjqNgsQ/iLGdqLdVExMVUY/JzYSphXSUoPw48=
[12:51:51] <izolate> new to mongo. is there a default user/pass?
[12:52:28] <Nodex> Lujeni [13:44:30] <Nodex> normally you can't multiple things on the same key/value in a single update. However, some of this changed in 2.6
[12:52:47] <jinmatt> izolate: no
[12:54:03] <Lujeni> Nodex, ok
[12:55:20] <Nodex> if you're on 2.6 then you should read the changelog to see what's allowed
[12:57:29] <joannac> jinmatt: i've done it with ubuntu, centos
[12:58:10] <joannac> jinmatt: if they're only running on one AWS accessibility zone you can get away with using internal IPs
[12:58:13] <joannac> otherwise external
[12:58:35] <jinmatt> joannac: any idea why I’m getting that exception error when I do rs.add() as given in the mongodb docs?
[12:59:15] <jinmatt> joannac: did you create a conf and pass it to rs.initiate(conf), or use the rs.add() method?
[12:59:20] <joannac> jinmatt: check the argument you gave --replSet
[13:00:49] <joannac> jinmatt: db.serverCmdLineOpts() on all your nodes, check the "replSet" value
[13:01:37] <jinmatt> joannac: I’ll try that, btw I specified replSet value in mongodb.conf
[13:02:17] <MANCHUCK> Lujeni, The problem is with activities.$.count
[13:02:27] <MANCHUCK> there is most likely no match based on your update
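MANCHUCK's diagnosis is the usual one for `activities.$.count`: the positional `$` only binds if the query part of the update matches an element of that array. A hypothetical shape (collection and field names invented; this also shows $inc and $addToSet coexisting in one update, per the earlier discussion):

```javascript
// The query must match inside the array for "$" to bind to an element:
db.users.update(
  { _id: 1, "activities.name": "login" },   // matches a specific array element
  { $inc: { "activities.$.count": 1 },      // "$" = the element matched above
    $addToSet: { tags: "active" } }         // $inc and $addToSet can combine
)
```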
[13:03:27] <joannac> jinmatt: doesn't matter, run that command, it'll tell you what was parsed
[13:03:43] <jinmatt> joannac: ok
[13:16:30] <jinmatt> joannac: its parsed as rs0, rs1 and rs2
[13:16:46] <jinmatt> joannac: just as I have specified
[13:18:55] <joannac> um, no
[13:19:04] <joannac> the replSet parameter has to be the same for all 3
[13:19:14] <joannac> that's how they know they should be in the same set
[13:20:05] <jinmatt> joannac: oh I see, I thought they were used to identify each replica set member
[13:20:34] <jinmatt> joannac: thanks for pointing out, I’m pretty much new to mongo
[13:30:05] <leotr> Hello. I have a question. Imagine that we created new collection and filled it with N docs with `a` key equal to k for k from 1 to N. After that we do find() on that collection and slowly iterate over it (using sleep(1 sec) in cycle and print `a` value). During that we remove all docs with even `a` key values. What will be in output of iteration?
[13:30:49] <Nodex> eh?
[13:31:07] <Nodex> perhaps you could ask that in a less confusing way?
[13:31:21] <leotr> i mean will it still output from 1 to N or after that deletion it will output only odd values?
[13:32:24] <Nodex> http://docs.mongodb.org/manual/faq/concurrency/ <--- maybe that will help
[13:32:44] <leotr> ah., thanks
[13:33:57] <Nodex> Cursor Isolation: Because the cursor is not isolated during its lifetime, intervening write operations on a document may result in a cursor that returns a document more than once if that document has changed. To handle this situation, see the information on snapshot mode.
[13:34:09] <Nodex> that too ^, it has a "snapshot mode"
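The snapshot mode Nodex quotes was, in this era, a cursor modifier (hypothetical collection name; note it could not be combined with sort() or hint(), and did not work on sharded collections):

```javascript
// Guarantees each document is returned at most once during the cursor's
// lifetime, at the cost of forcing a walk of the _id index:
db.mycoll.find().snapshot()
```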
[13:49:34] <Lujeni> Is it better to make two updates with only an $inc each, or one update with the full document?
[14:00:13] <cocotton> Hi everyone. I'm really new with mongo and I have the feeling I might have broke something. We have two cluster over here with 3 shards each. 1 primary, 1 secondary, 1 secondary (hidden). We needed to increase the disk size on each machine so following a procedure I've been given, I shutdowned a secondary node in each cluster, removed the disk on both machine, added a bigger one and started the machine again, creating the physical volume
[14:00:13] <cocotton> and everything else. Both the nodes seems to have joined the cluster without problem, they started synced. Now they seem to be done with the syncing since their stateStr is 'Secondary' and they have the same optime/optimeDate as the other one. Problem is, if I df on my machine, I now have less the space taken as before I removed the disk. It went from 200 ish Gb, to 89...
[14:00:46] <cocotton> I really feels like something's wrong here, but since I know almost nothing about mongo, I just want to make sure by asking you all
[14:03:30] <skot> No, that is not necessarily a problem. When you build a new node like that, it compacts and removes dead space from the old db. So if you had deleted collections or docs, that would result in smaller nodes when adding new ones.
[14:04:18] <skot> You can check the db.stats() for (logical) counts and things.
[14:06:09] <arrty> my data is totally relational but i want to use mongodb anyways because of its schemaless nature and powerful querying system. am i going to regret this?
[14:06:10] <Waheedi> how can i force a replica set member to be primary if there was no other instances but it
[14:08:26] <cocotton> skot: Sorry got disconnected. Wow that is actually pretty nice. Is there a way to force a "cleaning" job rather than increasing disk size?
[14:09:18] <bob_123> hi all, I have a question: I'm on ubuntu 12.04 and doing `sudo apt-get install mongodb-10gen` only updated me to 2.4.10, so to go to 2.6.0 I did this `apt-get install mongodb-org=2.6.0 mongodb-org-server=2.6.0 mongodb-org-shell=2.6.0 mongodb-org-mongos=2.6.0 mongodb-org-tools=2.6.0`, I upgraded to 2.6.0 but now mongo no longer sees my database data, is there a way to get it to see my data again?
[14:09:43] <bob_123> I know the data is still on disk but mongo doesn't see it
[14:11:30] <rspijker> cocotton: it's to do with how mongo stores stuff on disk
[14:11:45] <rspijker> you can force cleaning by doing a db.repairDatabase()
[14:11:59] <rspijker> but that's not recommended in production
[14:12:27] <rspijker> it can get very bad though, I had a situation in production where a <1GB data set took up 200GB
[14:17:15] <cocotton> rspijker: Wow, that's crazy! :P
[14:24:28] <Nodex> bob_123 : some file names changed in 2.6
[14:24:48] <Nodex> the main one being the config file name, make sure your init script it reading the correct one
[14:25:59] <bob_123> oh that's probably it, the config file name
[14:26:20] <Nodex> it -> is
[14:26:51] <bob_123> do you know what the config file name is in 2.6?
[14:27:52] <Nodex> mongod.conf
[14:28:01] <Nodex> as opposed to mongodb.conf
[14:31:35] <bob_123> Nodex: thank you very much! that was it
[15:52:39] <bushart> Hi
[15:52:43] <bushart> Is Russian OK in here? )
[15:53:16] <Nodex> niet
[15:54:10] <bushart> Can do like this on mongoDb: SELECT * FROM example GROUP BY param1 ORDER BY param2?
[15:54:58] <Derick> bushart: with the aggregation framework, yes
[15:56:14] <bushart> Derick: But what about this note? "The output of $group is not ordered."
[15:57:09] <bushart> Derick: it's from the official manual.
[15:57:30] <Derick> ?
[15:58:16] <bushart> Derick: See page http://docs.mongodb.org/manual/reference/operator/aggregation/group/
[15:58:45] <bushart> Derick: there it is written "The output of $group is not ordered. "
[15:58:59] <Derick> yes, so add a $sort too
[15:59:03] <bushart> Derick: What does this mean?
[15:59:08] <Derick> as another pipeline option
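Derick's suggestion, spelled out as a pipeline (field names taken from bushart's SQL example; the `$first` accumulator is an assumption, since the SQL didn't specify how to collapse each group):

```javascript
// Rough equivalent of: SELECT * FROM example GROUP BY param1 ORDER BY param2
db.example.aggregate([
  { $group: { _id: "$param1", param2: { $first: "$param2" } } },
  { $sort:  { param2: 1 } }   // the extra pipeline stage Derick mentions
])
```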
[15:59:52] <bushart> Derick: They will use the indexes?
[16:00:22] <Derick> no, that won't use an index.
[16:00:36] <Derick> it can't, as after a $group you don't have original documents (plus their index) anymore
[16:01:14] <bushart> Yes, I thought so. =(
[16:01:59] <bushart> Derick: This is a problem all databases?
[16:03:34] <bushart> Derick: I have 60 million records which must be sorted and grouped, all of them numeric. Have you heard of any tools that could help me?
[16:03:57] <Derick> bushart: how many items after the group?
[16:05:09] <bushart> Derick: one moment
[16:07:15] <Derick> I got to go though...
[16:07:40] <bushart> about 559
[16:07:53] <Derick> oh, don't worry about not using an index for sorting 500 items
[16:08:03] <Derick> it's a tiny amount!
[16:09:45] <bushart> At what number should I start to worry?
[16:28:17] <tongcx> hi guys, is there POJO mapper for BSON?
[16:28:36] <tongcx> basically convert bson string to a object and back?
[16:28:38] <tongcx> In Java
[16:32:05] <skot> morphia
[16:32:57] <skot> and more here: http://docs.mongodb.org/ecosystem/drivers/java/
[16:33:52] <tongcx> thanks
[16:34:34] <tongcx> skot: it works directly with mongodb, but does it work with bson?
[16:35:52] <skot> you would have to use the mapper interface to do the conversion and load bson streams into special DBObjects which wrap the bson, but possible without much more.
[16:35:56] <skot> post to the list to get help.
[16:47:40] <pasichnyk> how are people finding stability on 2.6.0? I'm on 2.4.9 now, and trying to decide whether i should move to 2.4.10 or 2.6.0. Any feedback is appreciated.
[16:49:23] <skot> I'd wait till the 2.6.1 release if you think you might hit any of the known 2.6.0 bugs, but otherwise it seems pretty stable :)
[16:50:05] <skot> https://jira.mongodb.org/issues/?jql=project%20%3D%20SERVER%20AND%20fixVersion%20%3D%20%222.6.1%22%20ORDER%20BY%20status%20DESC%2C%20priority%20DESC
[16:54:21] <pasichnyk> skot, thanks for the heads up. Scanning through the bugs, it looks like this one is a show stopper for us: https://jira.mongodb.org/browse/SERVER-13516
[17:11:54] <outcoldman> Hi folks, what is the best approach to optimize performance of quires like {start:{$lte:600005}, end: {$gte:600005}} ?
[17:13:46] <cheeser> for starters, $eq that.
[17:13:54] <kali> cheeser: nope
[17:13:59] <kali> start and end
[17:14:12] <cheeser> oh, right. :)
[17:14:14] <cheeser> i thought that looked odd.
[17:14:20] <cheeser> indexes
[17:14:30] <kali> outcoldman: the index intersection from 2.6 may help
[17:14:51] <cheeser> a compound index on start and end would too
[17:14:59] <outcoldman> kali, I'm on MongoDB 2.6
[17:15:00] <kali> cheeser: really ?
[17:15:12] <outcoldman> at current moment I have index on start and end
[17:15:49] <kali> cheeser: yeah, you're probably right, it allows to skip some document fetches, i guess
[17:15:57] <bushart> i got " Exceeded memory limit for $group, but didn't allow external sort. Pass allowDiskUse:true to opt in."
[17:16:08] <outcoldman> this is explain for this query http://pastebin.com/SkjLVvc0
[17:16:51] <bushart> How to fix it?
[17:17:38] <cheeser> use that option
[17:18:08] <bushart> how?
[17:21:23] <kali> it depends on the driver you're using, and chances are it's not supported yet
[17:23:05] <outcoldman> any thoughts on {start < 6000 < end} type of queries? index on {start,end} does not really fix the problem. This query still scan everything in index which is > 6000
[17:23:27] <cheeser> all the latest drivers should support allowDiskUse
[17:23:51] <cheeser> at least all the mongodb supported ones
[17:24:34] <kali> yeah, the official ones :)
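In the 2.6 shell, the option cheeser refers to is passed as a second argument alongside the pipeline; driver syntax varies, which is kali's point about support lagging:

```javascript
db.example.aggregate(
  [ { $group: { _id: "$param1", n: { $sum: 1 } } } ],
  { allowDiskUse: true }   // lets $group spill to disk past the in-memory limit
)
```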
[17:31:15] <mikebronner> can I import CSV files using mongoimport into embedded collections?
[17:35:49] <kali> mikebronner: nope
[17:36:23] <cheeser> what is an embedded collection?
[17:39:45] <mikebronner> thanks kali, I was afraid of as much
[17:49:43] <Chaos_Zero> Does anyone know if the number of replica set members will ever increase?
[17:49:54] <Chaos_Zero> Sorry, the max number.
[17:50:31] <BadHorsi1> http://pastebin.com/64bddzmF getting that strange behaviour... seems like [{a:1},{a:2}] is becoming [object Object] at some point...
[17:51:24] <kali> Chaos_Zero: unlikely
[17:55:00] <max> Hello!
[17:55:18] <Chaos_Zero> hi
[17:55:22] <sumaxi> Woops, name got changed automatically, xD
[17:56:00] <sumaxi> How do you sort the data from a find in order of submission?
[17:56:11] <sumaxi> I found: db.collection.find( { $query: {}, $orderby: { age : -1 } } ) but this doesn't work, even in shell.
[17:56:24] <Chaos_Zero> {'date':-1} ?
[17:56:24] <sumaxi> I can do age: -1 and age:1 with no difference.
[17:56:31] <cheeser> quotes around "$orderby"
[17:57:31] <sumaxi> cheeser: It made no difference unfortunately. :(
[17:58:00] <kali> sumaxi: wait. age ? is that a field in your documents ?
[17:58:40] <sumaxi> kali: Isn't it automatic? With http://docs.mongodb.org/manual/reference/operator/meta/orderby/ it's like it's a automatic value made by mongoDB, right?
[17:58:40] <cheeser> oh!
[17:58:44] <cheeser> i know why
[17:59:29] <cheeser> maybe...
[17:59:32] <kali> sumaxi: no. this doc assumes "age" is a field in the data
[17:59:47] <cheeser> use find() .sort({age:-1})
[18:00:00] <kali> cheeser: i'm afraid that won't help either :)
[18:00:08] <sumaxi> How would you do that when your putting in a callback?
[18:00:09] <cheeser> no?
[18:00:59] <kali> cheeser: sumaxi assumes "age" was some magic value maintained by mongodb
[18:01:04] <sumaxi> I've gotten some errors wheres I have the callback like: find({}, funciton (error,data) {...}).sort()
[18:01:15] <cheeser> oh. pfft.
[18:01:28] <sumaxi> And I assume this lovely magical values doesn't exist now. xD
[18:01:39] <kali> sumaxi: you may want to approximate that by sorting by the _id, if you use the default ObjectId
[18:04:30] <sumaxi> kali: Thanks!
[18:05:14] <sumaxi> kali: But will it always work? If ID is generated from MongoDB, will always be sorted correctly?
[18:06:03] <cheeser> time is a component of ObjectId and thus is always increasing.
[18:06:11] <cheeser> god this lag is killing me.
[18:06:37] <sumaxi> cheeser: thanks, I didn't realize that
[18:08:30] <kali> sumaxi: ObjectId generated by different client in the same second may be in the wrong order. but if they are more than one second apart the order will be fine
[18:09:36] <sumaxi> kali: Ah, that's why you said approximate. :)
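kali's caveat follows from the ObjectId layout: the first 4 bytes are a creation timestamp with one-second resolution, and the remaining bytes (machine/process/counter) don't order across clients within the same second. Extracting that timestamp from a hex ObjectId string (the example id is invented):

```javascript
// The first 8 hex chars of an ObjectId are seconds since the Unix epoch.
const oid = "5350a3520000000000000000";  // hypothetical _id from April 2014
const seconds = parseInt(oid.substring(0, 8), 16);
const created = new Date(seconds * 1000);
console.log(created.toISOString());
```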
[18:10:41] <BadHorsi1> Argh. "It's the use of 'type' as a property name", well that's nice :)
[18:10:45] <Chaos_Zero> so, assuming a really far out idea, where I wanted more secondaries than 11, is there any other known method?
[18:22:39] <skot> Chaos_Zero: not really without recompiling to change the limit. There will (probably) be support for more than 12 in the next release or so (2.8/3.0) — https://jira.mongodb.org/browse/SERVER-3110 — watch to see the progress in the next few months.
[18:45:50] <hotch> I’m trying to understand better the size of my mongodb db. On a dump, the folder ./dump is 34mb. Nothing, seemingly. On db.stats(), what are the units (kb/mb/?)
[18:46:37] <cheeser> bytes, i think
[18:47:45] <hey`ladies`> I've got a document that has been corrupted. it now has multiple identical keys. This seems to indicate that that should not be possible: http://docs.mongodb.org/manual/core/document/
[18:48:12] <hey`ladies`> any advice for deduping that collection? any idea how this can happen?
[18:49:40] <hydrajump> hi would mongodb be the wrong database to use with a dovecot email server?
[18:49:42] <cheeser> do you have unique index on that field?
[18:52:08] <joshua> hydrajump: It looks like some people were trying it out last year http://www.dovecot.org/list/dovecot/2013-April/089405.html
[18:52:19] <hey`ladies`> cheeser: no. but this isn't the problem of having the same value in multipl documents. the problem is having multiple of the same key in a single document
[18:58:13] <hydrajump> joshua yeah doesn't look too promising. I'll try asking on #dovecot ;)
[19:09:01] <bob_123> hello, quick question for anyone using mongoskin on node: does the latest mongoskin support aggregate cursors for mongo 2.6 and I just can't find them, or are aggregate cursors not supported yet?
[19:09:14] <bob_123> I've been doing some googling but haven't found a definitive answer
[19:23:12] <bob_123> does any node.js driver for mongodb support aggregate cursors?
[19:23:44] <cozby> hi, I've been following the mongo docs however after adding/creating an admin user any action I take using the admin user gives me the error "Error: not master at src/mongo/shell/db.js:1260"
[19:24:05] <cozby> I'm not sure what to do next
[19:24:23] <cozby> my replica setup is dysfunctional
[19:24:26] <cozby> and I'm trying to fix this
[19:24:36] <cozby> but I can't as I keep getting that bloody error
[19:24:42] <cozby> is there a backdoor user?
[19:24:44] <cozby> like a root user
[19:24:56] <cozby> I'm essentially locked out - I think
[19:27:53] <cozby> nm - I just nuked whatever was in my /data folder (I just started so no love lost)
[20:17:44] <excalq> Why am I having such a dog of a time trying to compile a simple tutorial app with the mongo-c-driver (mac osx mountain-lion)?
[20:23:49] <wc-> hi all, im having trouble with the positional operator or the elemMatch operator in a find
[20:24:00] <wc-> i was expecting to get just the element in the array that matches the elemMatch
[20:24:04] <wc-> but im getting the whole list
[20:24:45] <wc-> my query looks something like db.blah.findOne({"filings.injection_records": {"$elemMatch": {"std_date": ISODate("2013-05-01T00:00:00Z")}}})
[20:25:16] <wc-> is there a way to get just the element in the array that matched my result?
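wc-'s `$elemMatch` in the filter selects whole documents, not elements; to trim the returned array you also need `$elemMatch` (or the positional `$`) in the projection argument. A sketch reusing wc-'s own query; note that projecting into an array nested under a subdocument path like this had limited support in this era, so this assumes `filings` is a subdocument rather than an array:

```javascript
db.blah.findOne(
  { "filings.injection_records":
      { $elemMatch: { std_date: ISODate("2013-05-01T00:00:00Z") } } },
  { "filings.injection_records.$": 1 }  // keep only the first matching element
)
```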
[20:26:09] <boutell> Hi! I’m using the node mongodb-native driver. How can I get the version number of the server I’m talking to? I need to know if the new-style $text operator is gonna work. Thanks!
[20:34:59] <simpleAJ> I was wondering if anybody knew the storage that is needed to store a collection..possibly empty collection or just 1 or 2 2-4Kb documents each
[20:46:24] <excalq> Would be nice to have an update to http://api.mongodb.org/c/current/tutorial.html for 0.94
[21:05:08] <fresh2dev> hello, i'm trying to drop a database in 2.4 but it's always unauthorized. i have an admin user setup in the admin database with userAdminAnyDatabase. how do I grant myself permissions to drop a database?
[21:07:29] <fresh2dev> (seems not immediately obvious)
[21:08:59] <cozby> I without thinking restarted my replica master and now I can't even launch the mongo shell to connect to it
[21:09:09] <cozby> let alone start up the mongod proc agian
[21:10:10] <cozby> what do you do in this situation, I didn't follow the proper replica shutdown/restart process
[21:16:22] <daveops> cozby: is it throwing an error at you? what's in the logs?
[21:17:44] <cozby> 2014-04-17T21:15:51.349+0000 [initandlisten] ERROR: listen(): bind() failed errno:98 Address already in use for socket: 10.0.2.110:27017
[21:17:44] <cozby> 2014-04-17T21:15:51.349+0000 [initandlisten] ERROR: addr already in use
[21:17:58] <cozby> thats in the logs
[21:19:10] <cheeser> you already have something on that port
[21:19:24] <cozby> cheeser: thats incorrect
[21:19:33] <cozby> the only thing I had on that port was mongod
[21:19:37] <cozby> but that proc isn't even running
[21:20:46] <cheeser> clearly your operating system thinks otherwise.
[21:20:49] <cozby> sudo netstat -tulpn
[21:21:03] <cozby> when I run that I see nothing running on that port
[21:22:28] <cozby> cheeser:
[21:22:53] <cozby> cheeser: I got it, boner move by me, I had 0.0.0.0 and my 10 dot addy assigned to the bind_ip
[21:23:12] <cheeser> heh
[21:23:18] <cheeser> mystery solved at least
[22:03:03] <fresh2dev> if you call db.authenticate in the node.js driver is that per connection?
[22:03:32] <fresh2dev> http://mongodb.github.io/node-mongodb-native/api-generated/db.html#authenticate
[22:04:47] <fresh2dev> i keep getting: Thu Apr 17 22:03:42.606 [conn23] authenticate db: user1 { authenticate: 1, user: "user1", nonce: "9f9ccb8f05f3346d", key: "3f91d8740744fc320260698d2ef724cb" }
[22:05:05] <fresh2dev> errr
[22:05:07] <fresh2dev> wrong error
[22:05:25] <fresh2dev> auth: bad nonce received or getnonce not called. could be a driver bug or a security attack. db:user0
[22:15:06] <asturel> there is no non-blocking insert in c++ driver?
[22:27:52] <fresh2dev> i guess db.authenticate in node.js driver is not multithreaded
[22:28:04] <fresh2dev> err, not thread safe
[23:36:45] <Jadenn> hello, i'm having issues with the $unset and $upsert operators with the php mongo driver.
[23:37:27] <Jadenn> I am using the multi => true setting, however it only removes columns from the first document
[23:55:08] <joannac> Jadenn: hmm.
[23:55:30] <joannac> verified that your query matches multiple documents?
[23:56:05] <Jadenn> nevermind, it was my mistake. it worked for one of the columns, but not for an object